

Discussion Paper: The Threat of Real Time Deepfakes

Frankovits, Guy, Mirsky, Yisroel

arXiv.org Artificial Intelligence

Generative deep learning models are able to create realistic audio and video. This technology has been used to impersonate the faces and voices of individuals. These "deepfakes" are being used to spread misinformation, enable scams, perform fraud, and blackmail the innocent. The technology continues to advance, and today attackers have the ability to generate deepfakes in real time. This new capability poses a significant threat to society as attackers begin to exploit the technology in advanced social engineering attacks. In this paper, we discuss the implications of this emerging threat, identify the challenges with preventing these attacks, and suggest a better direction for researching stronger defences.


UK FCA, BoE, and PRA Publish Discussion Paper on Adopting AI in Financial Services

#artificialintelligence

On October 11, the Bank of England (BoE), the Prudential Regulation Authority (PRA), and the UK Financial Conduct Authority (FCA) (together, the Supervisory Authorities) published a discussion paper (DP5/22) on the safe and responsible adoption of artificial intelligence (AI) in financial services (Discussion Paper). The Discussion Paper forms part of the Supervisory Authorities' AI-related programme of work, including the AI Public Private Forum, and is being considered in light of the UK government's efforts towards regulating AI. The purpose of the Discussion Paper is to provide a platform for assessing the desirability of regulating AI technology adoption in UK financial services in a way that safeguards each of the Supervisory Authorities' own objectives. The BoE's objectives are to maintain financial stability and support the UK government's economic policy. The PRA focuses on the promotion of safety, soundness, and competition for services provided by PRA-authorized firms and insurance firms, while the FCA's strategic objective is to ensure market integrity, effective competition, and protection of consumers in the UK financial system. The Supervisory Authorities consider it useful to distinguish what constitutes AI by either (1) providing a more precise legal definition of what AI is (and what it is not); or (2) viewing AI as part of a wider spectrum of analytical techniques with a range of characteristics for mapping out AI.


UK FCA, PRA, and BoE publish discussion paper (DP5/22) on AI and machine learning

#artificialintelligence

In the discussion paper, the UK financial supervisory authorities have not provided a new legal framework or set out their intended future approach to regulating the use of AI and machine learning in financial services. However, they have assessed the benefits, risks, and harms related to the use of AI, as well as the current legal framework that applies to AI in financial services. The UK financial services regulators, the Bank of England (BoE), the Prudential Regulation Authority (PRA), and the Financial Conduct Authority (FCA) (together, the Supervisory Authorities) jointly published a discussion paper (DP5/22) on artificial intelligence (AI) and machine learning on 11 October 2022. The purpose of the discussion paper was to facilitate a public debate on the safe and responsible adoption of AI in UK financial services. The Supervisory Authorities have also raised discussion questions for stakeholder input, with the aim of understanding whether the current regulatory framework is sufficient to address the potential risks and harms associated with AI, and how any additional intervention may support the safe and responsible adoption of AI in UK financial services.


Supervisory Authorities publish discussion paper on artificial intelligence

#artificialintelligence

The UK financial services regulators, the Bank of England (BoE), the Prudential Regulation Authority (PRA), and the Financial Conduct Authority (FCA) – together, the Supervisory Authorities – jointly published a discussion paper (DP5/22) on artificial intelligence (AI) and machine learning on 11 October 2022. The purpose of the discussion paper was to facilitate a public debate on the safe and responsible adoption of AI in UK financial services. The Supervisory Authorities have also raised discussion questions for stakeholder input, with the aim of understanding whether the current regulatory framework is sufficient to address the potential risks and harms associated with AI, and how any additional intervention may support the safe and responsible adoption of AI in UK financial services. The discussion paper provides a platform for the Supervisory Authorities, experts, and stakeholders to collaborate and jointly assess whether the current legal framework can adequately regulate AI technology, safeguarding each of the Supervisory Authorities' objectives while at the same time promoting innovation in UK financial services. This consultation occurs in parallel with the UK government's ongoing work in developing its own cross-sector approach to the regulation of AI technology and will therefore provide a valuable contribution to this broader policy debate.


The human factor in artificial intelligence

#artificialintelligence

Financial regulation is forever running to catch up with evolving technology. There are many examples of this: the second Markets in Financial Instruments Directive (MiFID II) sought to make up ground on the increased electronification of markets since the introduction of MiFID I; policymakers in both the EU and the UK are at this very moment defining the regulatory perimeter around cryptoassets, more than a decade after the initial launch of bitcoin; and regulators first took action against runaway algorithms long before restrictions on algorithmic trading made it into regulatory rulebooks. Continuing this trend, on 11 October 2022, the Bank of England (BoE) and the UK Financial Conduct Authority (FCA) launched a joint discussion paper on how the UK regulators should approach the "safe and responsible" adoption of AI in financial services (FCA DP22/4 and BoE DP5/22) (the AI Discussion Paper), which is now open for responses. This follows the UK Government's Command Paper published in July 2022, announcing a "pro-innovation" approach to regulating AI (CP 728) across different sectors. One strong theme that comes out of the AI Discussion Paper is that, notwithstanding the potential benefits of AI in fostering innovation and reducing costs in financial services, the human factor is key to ensuring that AI is governed and overseen responsibly, and that potential negative impacts on clients and other stakeholders are mitigated appropriately. The fact that the regulators are consulting on bringing the oversight of AI expressly within the scope of the UK Senior Managers and Certification Regime (SMCR) illustrates the importance of this human element, and that humans should continue to run the machines, rather than the other way around.


Artificial Intelligence in Medical Diagnosis - National Academy of Medicine

#artificialintelligence

This public webinar, hosted by the National Academy of Medicine (NAM) and the U.S. Government Accountability Office (GAO), will explore the promise and issues associated with using artificial intelligence (AI) in medical diagnosis. This webinar builds on a recent GAO technology assessment titled Artificial Intelligence in Health Care: Benefits and Challenges of Machine Learning Technologies for Medical Diagnostics and an NAM Perspectives Discussion Paper titled Meeting the Moment: Addressing Barriers and Facilitating Clinical Adoption of Artificial Intelligence in Medical Diagnosis. Participants will hear an overview of, and reactions from field experts to, the GAO report and the NAM Discussion Paper. The GAO report discusses current and emerging machine learning (ML) medical diagnostic technologies for select diseases and provides a broad overview of the challenges that affect their development and adoption. The NAM Discussion Paper focuses primarily on deployment and presents a framework for evaluating and promoting provider and health system adoption of AI-diagnostic decision support (AI-DDS) tools while exploring intersecting equity issues.


How artificial intelligence can deliver real value to companies

#artificialintelligence

After decades of extravagant promises and frustrating disappointments, artificial intelligence (AI) is finally starting to deliver real-life benefits to early-adopting companies. Retailers on the digital frontier rely on AI-powered robots to run their warehouses--and even to automatically order stock when inventory runs low. Utilities use AI to forecast electricity demand. A confluence of developments is driving this new wave of AI development. Computer power is growing, algorithms and AI models are becoming more sophisticated, and, perhaps most important of all, the world is generating once-unimaginable volumes of the fuel that powers AI--data.


Artificial Intelligence Work Group Project Australia

#artificialintelligence

The Final Report also makes specific recommendations for the introduction of legislation regulating the use of facial recognition and other biometric technology, and for a moratorium on the use of this technology in AI-informed decision-making until such legislation is enacted. The recommendations of the AHRC have been submitted to the Australian Government, which has the ability to determine whether or not to adopt them. The adoption of the AHRC's recommendations for the introduction of specific legislation governing the use of AI would signal a change from the approach to the regulation of AI and other emerging technologies that has been adopted in Australia to date. Free data access is an issue in the use of AI tools in the provision of legal services in Australia, as the success of an AI tool will be determined by the size and diversity of the sample data used to train that tool. There are a number of factors that contribute to free data access in Australia, and generally these factors apply across the spectrum of different categories of AI tools discussed in question 2 (being litigation, transactional, and knowledge management tools).


How Will Health Care Regulators Address Artificial Intelligence?

#artificialintelligence

Policymakers around the world are developing guidelines for use of artificial intelligence in health care. Baymax, the robotic health aide and unlikely hero from the movie Big Hero 6, is an adorable cartoon character, an outlandish vision of a high-tech future. But underlying Baymax's character is the very realistic concept of an artificial intelligence (AI) system that can be applied to health care. As AI technology advances, how will regulators encourage innovation while protecting patient safety? AI does not have a precise definition, but the term generally describes machines that have the capacity to process and respond to stimulation in a manner similar to human thought processes.


What's ahead for AI and Machine Learning in healthcare?

#artificialintelligence

In 2019, we saw increased interest and adoption of machine learning (ML) and artificial intelligence (AI) technology in healthcare. Organizations have been piloting solutions that range from helping diagnose patients, to ensuring the privacy of their data. While the industry is beginning to see some benefits from these tools, many end-users are starting to ask important questions like: how does the tool work, or where are my data stored? Similarly, in the last year, we have also seen organizations increasingly send and store their data at third-party vendors instead of on-premises. The combination of these two trends has raised concerns about data protection and the vendor's appropriate use of data.